As the number of distributed services (or microservices) in cloud-native applications grows, resource management becomes a challenging task. These applications tend to be user-facing and latency-sensitive, and our goal is to continuously minimize the amount of CPU resources allocated while still satisfying the application latency SLO. Although previous efforts have proposed simple heuristics and sophisticated ML-based techniques, we believe that a practical resource manager should accurately scale CPU resources for diverse applications with minimal human effort and operational overhead. To this end, we ask: can we systematically break resource management down into subproblems solvable by practical policies? Based on the notion of a CPU-throttle-based performance target, we decouple the mechanisms of SLO feedback and resource control, and implement a two-level framework -- Autothrottle. It combines a lightweight learned controller at the global level with agile per-microservice controllers at the local level. We evaluate Autothrottle on three microservice applications, with both short-term and 21-day production workload traces. Empirical results show CPU core savings of up to 26.21% over the best-performing baselines across applications, while maintaining the latency SLO.
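A minimal sketch of the two-level idea, with all policy details assumed for illustration (Autothrottle's actual global controller is learned, not a fixed rule): a slow global loop turns SLO feedback into per-service CPU-throttle targets, and fast local loops resize CPU allocations to track those targets.

```python
# Hypothetical sketch of a two-level controller in the spirit of
# Autothrottle. Service names, the SLO value, and both proportional
# rules below are illustrative assumptions, not the paper's policies.

SLO_MS = 200.0  # end-to-end latency SLO (assumed value)

def global_controller(p99_latency_ms, throttle_targets):
    """Slow loop: relax throttle targets when latency has headroom,
    tighten them (faster) when the SLO is at risk."""
    headroom = (SLO_MS - p99_latency_ms) / SLO_MS
    step = 0.01 if headroom > 0 else -0.02
    return {svc: max(0.0, min(0.5, t + step))
            for svc, t in throttle_targets.items()}

def local_controller(observed_throttle, target_throttle, cpu_limit):
    """Fast per-service loop: track the throttle target by resizing CPU."""
    if observed_throttle > target_throttle:   # throttled too often
        return cpu_limit * 1.1                # grant more CPU
    return max(0.1, cpu_limit * 0.98)         # slowly reclaim CPU

if __name__ == "__main__":
    targets = {"frontend": 0.05, "cart": 0.05}
    limits = {"frontend": 4.0, "cart": 2.0}
    # One control round with made-up telemetry.
    targets = global_controller(p99_latency_ms=150.0, throttle_targets=targets)
    for svc in limits:
        limits[svc] = local_controller(observed_throttle=0.08,
                                       target_throttle=targets[svc],
                                       cpu_limit=limits[svc])
    print(targets, limits)
```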
Our education system comprises a series of curricula. For example, when we learn mathematics at school, we learn in order from addition, to multiplication, and later to integration. Delineating a curriculum for teaching either a human or a machine shares the underlying goal of maximizing the positive knowledge transfer from early to later tasks and minimizing forgetting of the early tasks. Here, we exhaustively surveyed the effect of curricula on existing continual learning algorithms in the class-incremental setting, where algorithms must learn classes one at a time from a continuous stream of data. We observed that across a breadth of possible class orders (curricula), curricula influence the retention of information, and that this effect is not just a product of stochasticity. Further, as a primary effort toward automated curriculum design, we proposed a method capable of designing and ranking effective curricula based on inter-class feature similarities. We compared the predicted curricula against empirically determined effective curricula and observed significant overlap between the two. To support the study of a curriculum designer, we conducted a series of human psychophysics experiments and contributed a new Continual Learning benchmark in object recognition. We assessed the degree of agreement in effective curricula between humans and machines. Surprisingly, our curriculum designer successfully predicts an optimal set of curricula that is also effective for human learning. There are many considerations in curriculum design, such as timely student feedback and learning with multiple modalities. Our study is the first attempt to set a standard framework for the community to tackle the problem of teaching humans and machines to learn to learn continuously.
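A hedged sketch of one way such a similarity-based curriculum designer could work; scoring an ordering by the cosine similarity of consecutive class-mean features is an illustrative assumption, not necessarily the paper's exact criterion.

```python
# Rank candidate curricula (class orders) by inter-class feature
# similarity: compute a mean feature vector per class, then score each
# ordering by how similar consecutive classes are. Synthetic features
# stand in for real embeddings.
import itertools
import numpy as np

def class_means(features, labels):
    """Mean feature vector per class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def curriculum_score(order, means):
    """Higher = consecutive classes are closer in feature space."""
    return sum(cosine(means[a], means[b]) for a, b in zip(order, order[1:]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(400, 64))       # stand-in embeddings
    labels = rng.integers(0, 4, size=400)    # 4 classes
    means = class_means(feats, labels)
    ranked = sorted(itertools.permutations(range(4)),
                    key=lambda o: curriculum_score(o, means), reverse=True)
    print("best predicted curriculum:", ranked[0])
```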
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
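Since the BLOOM checkpoints are openly released on the Hugging Face Hub under the bigscience organization, the standard transformers text-generation path applies; the snippet below uses the small bloom-560m variant for practicality (the full model is "bigscience/bloom", 176B parameters).

```python
# Minimal usage sketch: load a BLOOM checkpoint and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # small variant; full model: "bigscience/bloom"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("BLOOM is a 176B-parameter open-access", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```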
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
Open-World Instance Segmentation (OWIS) aims to segment class-agnostic instances from images, and has a wide range of real-world applications such as autonomous driving. Most existing approaches follow a two-stage pipeline: class-agnostic detection first, followed by class-specific mask segmentation. In contrast, this paper proposes a single-stage framework that directly produces a mask for each instance. In addition, instance mask annotations in existing datasets can be noisy. To overcome this issue, we introduce a new regularization loss. Specifically, we first train an extra branch to perform the auxiliary task of predicting foreground regions (i.e., regions belonging to any object instance), and then encourage the prediction of the auxiliary branch to be consistent with the predicted instance masks. The key insight is that such a cross-task consistency loss can act as an error-correcting mechanism to combat annotation errors. Furthermore, we find that the proposed cross-task consistency loss can be applied to images without any annotations, lending itself to a semi-supervised learning method. Through extensive experiments, we demonstrate that the proposed approach achieves impressive results in both fully supervised and semi-supervised settings. Compared to SOTA methods, the proposed approach improves the $AP_{100}$ score by 4.75% in the UVO$\rightarrow$UVO setting and by 4.05% in the COCO$\rightarrow$UVO setting. In the semi-supervised setting, our model learned with only 30% of the labeled data even outperforms its fully supervised counterpart trained with 50% of the labeled data. The code will be released soon.
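A hedged PyTorch sketch of the cross-task consistency idea: aggregate per-instance mask predictions into a foreground map and penalize disagreement with the auxiliary foreground branch. Aggregating with a max and using an L1 penalty are illustrative assumptions, not the paper's exact formulation.

```python
# Cross-task consistency between instance masks and an auxiliary
# foreground branch. Note the loss uses no ground truth, which is why
# it also applies to unlabeled images (the semi-supervised setting).
import torch
import torch.nn.functional as F

def cross_task_consistency(instance_logits, foreground_logits):
    """instance_logits: (N_inst, H, W); foreground_logits: (H, W)."""
    inst_fg = torch.sigmoid(instance_logits).amax(dim=0)  # union of instances
    aux_fg = torch.sigmoid(foreground_logits)
    return F.l1_loss(inst_fg, aux_fg)

inst = torch.randn(5, 64, 64, requires_grad=True)  # 5 predicted instance masks
fg = torch.randn(64, 64, requires_grad=True)       # auxiliary foreground map
loss = cross_task_consistency(inst, fg)
loss.backward()
print(float(loss))
```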
We present Branch-Train-Merge (BTM), a communication-efficient algorithm for embarrassingly parallel training of large language models (LLMs). We show it is possible to independently train subparts of a new class of LLMs on different subsets of the data, eliminating the massive multi-node synchronization currently required to train LLMs. BTM learns a set of independent Expert Language Models (ELMs), each specialized to a different textual domain, such as scientific or legal text. These ELMs can be added and removed to update data coverage, ensembled to generalize to new domains, or averaged to collapse back to a single LM for efficient inference. New ELMs are learned by branching from (mixtures of) ELMs in the current set, further training the parameters on data for the new domain, and then merging the resulting model back into the set for future use. Experiments show that BTM improves in- and out-of-domain perplexities relative to GPT-style Transformer LMs, when controlling for training cost. Through extensive analysis, we show that these results are robust to different ELM initialization schemes, but require expert domain specialization; LM ensembles with random data splits do not perform well. We also present a study of scaling BTM to a new corpus of 64 domains (192B whitespace-separated tokens in total); the resulting LM (22.4B total parameters) performs as well as a Transformer LM trained with 2.5x more compute. These gains grow with the number of domains, suggesting that more aggressive parallelism could be used to efficiently train larger models in future work.
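A hedged sketch of the branch-train-merge life cycle on toy models: branch a new expert by averaging the parameters of existing ELMs, train it on new-domain data, and merge it back into the set. The toy MLP and uniform averaging stand in for the paper's Transformer ELMs and its initialization schemes.

```python
# Branch-Train-Merge on toy models: all ELMs share an architecture, so
# branching and collapsing reduce to state-dict parameter averaging.
import torch
import torch.nn as nn

def average_state_dicts(state_dicts):
    """Uniform parameter average of same-architecture models."""
    return {key: torch.stack([sd[key].float() for sd in state_dicts]).mean(0)
            for key in state_dicts[0]}

def make_elm():
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))

def branch(elms):
    """Initialize a new ELM from (a mixture of) the current expert set."""
    new = make_elm()
    new.load_state_dict(average_state_dicts([m.state_dict() for m in elms]))
    return new

elms = [make_elm() for _ in range(3)]        # existing domain experts
expert = branch(elms)                        # branch ...
opt = torch.optim.SGD(expert.parameters(), lr=0.1)
x = torch.randn(8, 16)                       # stand-in "new domain" data
loss = (expert(x) - x).pow(2).mean()         # ... train (one toy step) ...
loss.backward()
opt.step()
elms.append(expert)                          # ... merge back into the set
# For efficient inference, the whole set can also be collapsed by averaging.
```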
Regulatory stress tests have become the primary tool for setting capital requirements at the largest U.S. banks. The Federal Reserve uses confidential models to evaluate bank-specific outcomes for bank-specific portfolios in shared stress scenarios. As a matter of policy, the same models are used for all banks, despite considerable heterogeneity across institutions; individual banks have contended that some models are not suited to their businesses. Motivated by this debate, we ask, what is a fair aggregation of individually tailored models? We argue that simply pooling data across banks treats banks equally but suffers from two deficiencies: it may distort the impact of legitimate portfolio features, and it is vulnerable to implicit misdirection of legitimate information to infer bank identity. We compare various notions of regression fairness to address these deficiencies, considering both forecast accuracy and equal treatment. In the setting of linear models, we advocate estimating and then discarding centered bank fixed effects as preferable to simply ignoring differences across banks. We present evidence that the overall impact can be material. We also discuss extensions to nonlinear models.
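A small numerical sketch of the advocated procedure under assumed data: fit a linear model with per-bank intercepts (fixed effects), then score every bank with the common slope plus the average intercept, so that legitimate portfolio features matter but bank identity does not.

```python
# Estimate-then-discard centered bank fixed effects in a linear model.
# The synthetic single-feature setup is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
n_banks, n_obs = 3, 200
bank = rng.integers(0, n_banks, size=n_obs)
x = rng.normal(size=n_obs)                      # portfolio feature
alpha_true = np.array([1.0, 2.0, 3.0])          # bank fixed effects
y = alpha_true[bank] + 0.5 * x + 0.1 * rng.normal(size=n_obs)

# Design matrix: one dummy per bank (absorbing the intercept) plus x.
D = np.zeros((n_obs, n_banks))
D[np.arange(n_obs), bank] = 1.0
X = np.hstack([D, x[:, None]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
alpha_hat, beta_hat = coef[:n_banks], coef[n_banks:]

# Discard the centered fixed effects: every bank gets the mean intercept,
# so predictions depend on the portfolio (x) but not on bank identity.
y_pred = alpha_hat.mean() + x * beta_hat[0]
print("fixed effects:", alpha_hat.round(2), "slope:", beta_hat.round(2))
```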
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
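A hedged sketch of scoring a model on a BIG-bench-style JSON task; the two-field example schema ({"input", "target"}) and the trivial echo "model" below are illustrative assumptions, not the benchmark's authoritative format.

```python
# Exact-match scoring of a stand-in model on a toy JSON task written in
# a BIG-bench-like style.
import json

task = json.loads("""{
  "name": "toy_arithmetic",
  "examples": [
    {"input": "2 + 2 =", "target": "4"},
    {"input": "3 + 5 =", "target": "8"}
  ]
}""")

def model(prompt: str) -> str:
    """Stand-in for a real LM; always answers 4."""
    return "4"

correct = sum(model(ex["input"]).strip() == ex["target"]
              for ex in task["examples"])
print(f'{task["name"]}: {correct}/{len(task["examples"])} exact match')
```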
Semantic representations are of great benefit to the video text tracking (VTT) task, which requires simultaneously classifying, detecting, and tracking texts in videos. Most existing approaches tackle this task by appearance similarity across consecutive frames, while ignoring rich semantic features. In this paper, we explore robustly tracking video text with contrastive learning of semantic and visual representations. Correspondingly, we present an end-to-end video text tracker with Semantic and Visual Representations (SVRep), which detects and tracks texts by exploiting the visual and semantic relationships between different texts in a video sequence. Moreover, with a lightweight architecture, SVRep achieves state-of-the-art performance while maintaining competitive inference speed. Specifically, with a ResNet-18 backbone, SVRep achieves an ${\rm ID}_{F1}$ of $\textbf{65.9\%}$, running at $\textbf{16.7}$ FPS, on the ICDAR2015 (video) dataset, an $\textbf{8.6\%}$ improvement over the previous state-of-the-art methods.
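A hedged sketch of the kind of contrastive objective such a tracker builds on: pull the embeddings of the same text instance in adjacent frames together and push different instances apart, in the standard InfoNCE form. The row-wise pairing convention below is an assumption for illustration, not SVRep's exact loss.

```python
# InfoNCE between embeddings of text instances in two adjacent frames.
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.07):
    """z_a, z_b: (N, D) embeddings; row i of z_a matches row i of z_b."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature       # (N, N) similarity matrix
    labels = torch.arange(z_a.size(0))         # positives on the diagonal
    return F.cross_entropy(logits, labels)

frame_t = torch.randn(8, 128, requires_grad=True)   # text features, frame t
frame_t1 = torch.randn(8, 128, requires_grad=True)  # same texts, frame t+1
loss = info_nce(frame_t, frame_t1)
loss.backward()
print(float(loss))
```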
Self-attention has become an integral component of recent network architectures, e.g., the Transformers that dominate major image and video benchmarks, because self-attention can flexibly model long-range information. For the same reason, researchers have recently made attempts to revive the Multi-Layer Perceptron (MLP) and have proposed a few MLP-like architectures, showing great potential. However, current MLP-like architectures are not good at capturing local details and lack a progressive understanding of core details in images and/or videos. To overcome this issue, we propose a novel MorphMLP architecture that focuses on capturing local details at the low-level layers, while gradually changing to focus on long-range modeling at the high-level layers. Specifically, we design a Fully-Connected-like layer, dubbed MorphFC, with two morphable filters that gradually grow their receptive fields along the height and width dimensions. More interestingly, we propose to flexibly adapt our MorphFC layer to the video domain. To the best of our knowledge, we are the first to create an MLP-like backbone for learning video representations. Finally, we conduct extensive experiments on image classification, semantic segmentation, and video classification. Our MorphMLP, such a self-attention-free backbone, can be as powerful as self-attention-based models.
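A hedged, simplified sketch of a MorphFC-style layer: a shared linear layer mixes features within fixed-length chunks along the height dimension, and stacking layers with growing chunk length grows the receptive field. The chunking details and the omission of the width branch are simplifying assumptions relative to the paper's design.

```python
# Simplified MorphFC-like layer mixing (channel, height-chunk) groups.
import torch
import torch.nn as nn

class MorphFCHeight(nn.Module):
    def __init__(self, channels: int, chunk_len: int):
        super().__init__()
        self.chunk_len = chunk_len
        self.fc = nn.Linear(channels * chunk_len, channels * chunk_len)

    def forward(self, x):                     # x: (B, C, H, W), H % L == 0
        b, c, h, w = x.shape
        l = self.chunk_len
        # Group H into chunks of length l and flatten (l, C) per chunk.
        x = x.permute(0, 3, 2, 1).reshape(b, w, h // l, l * c)
        x = self.fc(x)                        # mix within each chunk
        return x.reshape(b, w, h, c).permute(0, 3, 2, 1)

layer = MorphFCHeight(channels=16, chunk_len=4)  # deeper layers: larger chunks
out = layer(torch.randn(2, 16, 8, 8))
print(out.shape)                                 # torch.Size([2, 16, 8, 8])
```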